    Evaluating methodological quality of Prognostic models Including Patient-reported HeAlth outcomes in oncologY (EPIPHANY): A systematic review protocol

    Introduction While there is mounting evidence of the independent prognostic value of patient-reported outcomes (PROs) for overall survival (OS) in patients with cancer, the conduct of these studies may pose a number of methodological challenges. The aim of this systematic review is to evaluate the quality of published studies in this research area, in order to identify methodological and statistical issues deserving special attention and, where possible, to provide evidence-based recommendations. Methods and analysis An electronic search of PubMed will be performed to identify studies developing or validating a prognostic model that includes PROs as predictors. Two reviewers will independently collect data using a predefined, standardised data extraction form covering study characteristics, the PRO measures used and the multivariable prognostic models. Study selection will be reported following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, with the data extraction form using fields from the Critical Appraisal and Data Extraction for Systematic Reviews of Prediction Modelling Studies (CHARMS) checklist for multivariable models. Methodological quality assessment will also be performed, based on prespecified domains of the CHARMS checklist. As substantial heterogeneity of the included studies is expected, a narrative evidence synthesis will also be provided. Ethics and dissemination Because this systematic review will use only published data, ethical approval is not required. Findings from this review will be published in peer-reviewed scientific journals and presented at major international conferences.
    We anticipate that this review will help identify key areas for improvement in conducting and reporting prognostic factor analyses with PROs in oncology, and will lay the groundwork for future evidence-based recommendations in this area of research. PROSPERO registration number: CRD42018099160.

    Stability of clinical prediction models developed using statistical or machine learning methods

    Clinical prediction models estimate an individual's risk of a particular health outcome, conditional on their values of multiple predictors. A developed model is a consequence of the development dataset and the chosen model-building strategy, including the sample size, number of predictors and analysis method (e.g., regression or machine learning). Here, we raise the concern that many models are developed using small datasets that lead to instability in the model and its predictions (estimated risks). We define four levels of model stability in estimated risks, moving from the overall mean to the individual level. Then, through simulation and case studies of statistical and machine learning approaches, we show that instability in a model's estimated risks is often considerable, and ultimately manifests itself as miscalibration of predictions in new data. Therefore, we recommend researchers always examine instability at the model development stage, and we propose instability plots and measures to do so. This entails repeating the model-building steps (those used in the development of the original prediction model) in each of multiple (e.g., 1000) bootstrap samples to produce multiple bootstrap models, and then deriving (i) a prediction instability plot of bootstrap model predictions (y-axis) versus original model predictions (x-axis); (ii) a calibration instability plot showing calibration curves for the bootstrap models in the original sample; and (iii) the instability index, which is the mean absolute difference between individuals' original and bootstrap model predictions. A case study is used to illustrate how these instability assessments help reassure (or not) whether model predictions are likely to be reliable, whilst also informing a model's critical appraisal (risk of bias rating), fairness assessment and further validation requirements. Comment: 30 pages, 7 figures.
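The bootstrap instability procedure described in this abstract can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes a logistic regression model, synthetic data, and 200 bootstrap samples (rather than the 1000 suggested) to keep the demo fast.

```python
# Sketch of the bootstrap instability assessment: refit the model-building
# steps in each bootstrap sample, predict back on the original individuals,
# and summarise the spread of those predictions.
# Assumptions (illustrative only): logistic regression, synthetic data, B=200.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Small synthetic development dataset: 2 predictors, binary outcome.
n, p = 150, 2
X = rng.normal(size=(n, p))
true_risk = 1 / (1 + np.exp(-(X[:, 0] - 0.5 * X[:, 1])))
y = (rng.random(n) < true_risk).astype(int)

# Original model and its estimated risks for the development individuals.
original = LogisticRegression().fit(X, y)
orig_pred = original.predict_proba(X)[:, 1]

# Repeat the model-building steps in each bootstrap sample, then apply each
# bootstrap model to the ORIGINAL individuals.
B = 200
boot_preds = np.empty((B, n))
for b in range(B):
    idx = rng.integers(0, n, n)  # resample with replacement
    m = LogisticRegression().fit(X[idx], y[idx])
    boot_preds[b] = m.predict_proba(X)[:, 1]

# Instability index: mean absolute difference between individuals'
# original and bootstrap model predictions.
instability_index = np.mean(np.abs(boot_preds - orig_pred))
print(f"instability index = {instability_index:.3f}")

# A prediction instability plot scatters bootstrap predictions (y-axis)
# against original predictions (x-axis), e.g. with matplotlib:
#   plt.scatter(np.tile(orig_pred, B), boot_preds.ravel(), s=1, alpha=0.1)
```

With a larger development sample the bootstrap predictions cluster tightly around the identity line and the instability index shrinks, which is exactly the small-sample effect the abstract warns about.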

    An independent external validation and evaluation of QRISK cardiovascular risk prediction: a prospective open cohort study

    Objective To independently evaluate the performance of the QRISK score for predicting 10-year risk of cardiovascular disease in an independent UK cohort of patients from general practice, and to compare its performance with the Framingham equations.